On the Duality Gap Convergence of ADMM Methods
Authors
Abstract
This paper provides a duality gap convergence analysis for the standard ADMM as well as a linearized version of ADMM. It is shown that under appropriate conditions, both methods achieve linear convergence. However, the standard ADMM achieves a faster accelerated convergence rate than that of the linearized ADMM. A simple numerical example is used to illustrate the difference in convergence behavior.
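The comparison in the abstract can be illustrated with a minimal sketch. The problem (a small lasso instance), function names, and parameter choices below are my own illustration, not the paper's numerical example: standard ADMM solves its x-subproblem exactly via a linear solve, while the linearized variant replaces that solve with a single gradient step.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=500):
    """Standard ADMM for min_x 0.5||Ax - b||^2 + lam*||x||_1, split as x = z.
    The x-update solves its subproblem exactly (a linear system)."""
    n = A.shape[1]
    AtA_rho = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)  # u: scaled dual variable
    for _ in range(iters):
        x = np.linalg.solve(AtA_rho, Atb + rho * (z - u))  # exact x-minimization
        z = soft_threshold(x + u, lam / rho)               # prox of the l1 term
        u = u + x - z                                      # dual update
    return z

def linearized_admm_lasso(A, b, lam, rho=1.0, iters=5000):
    """Linearized ADMM: the exact x-update is replaced by one gradient step
    on the augmented Lagrangian, avoiding the linear solve."""
    n = A.shape[1]
    tau = 1.0 / (np.linalg.norm(A, 2) ** 2 + rho)  # step <= 1/Lipschitz constant
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - b) + rho * (x - z + u)  # gradient of x-subproblem
        x = x - tau * grad                            # inexact (linearized) update
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z
```

On a well-conditioned instance, both variants reach the same objective value, but the standard version typically does so in far fewer iterations, consistent with the faster rate the paper establishes; the linearized version trades a cheaper per-iteration cost for more iterations.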
Similar papers
Accelerated Variance Reduced Stochastic ADMM
Recently, many variance reduced stochastic alternating direction method of multipliers (ADMM) methods (e.g. SAG-ADMM, SDCA-ADMM and SVRG-ADMM) have made exciting progress such as linear convergence rates for strongly convex problems. However, the best known convergence rate for general convex problems is O(1/T) as opposed to O(1/T^2) of accelerated batch algorithms, where T is the number of iterations...
Scalable Stochastic Alternating Direction Method of Multipliers
Alternating direction method of multipliers (ADMM) has been widely used in many applications due to its promising performance in solving complex regularization problems and large-scale distributed optimization problems. Stochastic ADMM, which visits only one sample or a mini-batch of samples each time, has recently been proved to achieve better performance than batch ADMM. However, most stochastic...
On the convergence rate of Newton interior-point methods in the absence of strict complementarity
In the absence of strict complementarity, Monteiro and Wright [7] proved that the convergence rate for a class of Newton interior-point methods for linear complementarity problems is at best linear. They also established an upper bound of 1/4 for the Q1-factor of the duality gap sequence when the steplengths converge to one. In the current paper, we prove that the Q1-factor of the duality gap...
An Iterative Smoothing Algorithm for Regression with Structured Sparsity
High-dimensional regression or classification models are increasingly used to analyze biological data such as neuroimaging or genetic data sets. However, classical penalized algorithms produce dense solutions that are difficult to interpret without arbitrary thresholding. Alternatives based on sparsity-inducing penalties suffer from coefficient instability. Complex structured sparsity-inducing...
Publication year: 2015